
    Automatic Estimation of the Exposure to Lateral Collision in Signalized Intersections using Video Sensors

    Intersections constitute one of the most dangerous elements in road systems. Traffic signals remain the most common way to control traffic at high-volume intersections and offer many opportunities to apply intelligent transportation systems to make traffic more efficient and safe. This paper describes an automated method to estimate the temporal exposure of road users crossing the conflict zone to lateral collision with road users originating from a different approach. This component is part of a larger system relying on video sensors to provide queue lengths and spatial occupancy that are used for real-time traffic control and monitoring. The method is evaluated on data collected during a real-world experiment.
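
    The abstract does not give the exact formulation, but the core quantity can be illustrated with a minimal sketch: assuming the video sensors provide, for each road user, entry and exit times for the conflict zone, the exposure of a crossing user is the total time it shares the zone with users arriving from a different approach. The Presence class and the simple overlap rule below are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Presence:
    """Interval during which a road user occupies the conflict zone (illustrative)."""
    user_id: int
    approach: str   # e.g. "north-south" or "east-west"
    t_in: float     # entry time into the conflict zone (s)
    t_out: float    # exit time from the conflict zone (s)

def overlap(a: Presence, b: Presence) -> float:
    """Duration (s) during which both users are inside the conflict zone."""
    return max(0.0, min(a.t_out, b.t_out) - max(a.t_in, b.t_in))

def lateral_exposure(user: Presence, others: list) -> float:
    """Total time the user shares the conflict zone with users from another approach."""
    return sum(overlap(user, o) for o in others if o.approach != user.approach)

# Example: a crossing user exposed to two users arriving from the perpendicular approach.
ego = Presence(1, "north-south", t_in=10.0, t_out=14.0)
others = [Presence(2, "east-west", 12.0, 13.0), Presence(3, "east-west", 13.5, 16.0)]
print(lateral_exposure(ego, others))  # 1.0 + 0.5 = 1.5 s
```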

    Tracking in Urban Traffic Scenes from Background Subtraction and Object Detection

    In this paper, we propose to combine detections from background subtraction and from a multiclass object detector for multiple object tracking (MOT) in urban traffic scenes. These objects are associated across frames using spatial, colour and class label information, and trajectory prediction is evaluated to yield the final MOT outputs. The proposed method was tested on the Urban Tracker dataset and shows competitive performance compared to state-of-the-art approaches. Results show that the integration of different detection inputs remains a challenging task that greatly affects the MOT performance.
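
    As an illustration of how spatial, colour and class label cues can be combined for frame-to-frame association, a minimal matching cost might look like the sketch below; the exact cost function, histogram distance and weights used in the paper are not given in the abstract, so those here are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def association_cost(track, det, w_spatial=0.5, w_colour=0.3, w_class=0.2):
    """Lower is better. 'track' and 'det' are dicts with a 'box' (x1, y1, x2, y2),
    a normalized colour histogram 'hist' and a class 'label' (assumed fields)."""
    spatial = 1.0 - iou(track["box"], det["box"])
    colour = 1.0 - float(np.sum(np.sqrt(track["hist"] * det["hist"])))  # Bhattacharyya coefficient
    label = 0.0 if track["label"] == det["label"] else 1.0
    return w_spatial * spatial + w_colour * colour + w_class * label

def associate(tracks, detections):
    """One-to-one assignment over the cost matrix (Hungarian algorithm)."""
    cost = np.array([[association_cost(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols))
```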

    Background subtraction based on Local Shape

    We present a novel approach to background subtraction that is based on the local shape of small image regions. In our approach, an image region centered on a pixel is modeled using the local self-similarity descriptor. We aim to obtain reliable change detection based on local shape changes in an image when foreground objects are moving. The method first builds a background model and compares the local self-similarities between the background model and the subsequent frames to distinguish background and foreground objects. Post-processing is then used to refine the boundaries of moving objects. Results show that this approach is promising as the foregrounds obtained are complete, although they often include shadows. Comment: 4 pages, 5 figures, 3 tables.
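
    A rough sketch of the idea, under simplifying assumptions: the local self-similarity descriptor correlates a small patch centred on a pixel with patches sampled from a surrounding region; the descriptor is computed on both the background model and the current frame, and a pixel is declared foreground when the two descriptors differ markedly. The patch and region sizes, the Gaussian mapping of the SSD and the flat (non log-polar) binning below are illustrative simplifications, not the paper's parameters.

```python
import numpy as np

def local_self_similarity(img, x, y, patch=2, region=10, step=2, sigma=25.0):
    """Simplified local self-similarity descriptor at pixel (x, y): the SSD between
    the centre patch and patches sampled in a surrounding region is mapped to a
    correlation surface exp(-SSD / sigma^2) and flattened into a unit vector.
    (x, y) is assumed to be far enough from the image border; the original
    descriptor additionally bins this surface in log-polar coordinates."""
    centre = img[y - patch:y + patch + 1, x - patch:x + patch + 1].astype(np.float32)
    values = []
    for dy in range(-region, region + 1, step):
        for dx in range(-region, region + 1, step):
            cand = img[y + dy - patch:y + dy + patch + 1,
                       x + dx - patch:x + dx + patch + 1].astype(np.float32)
            values.append(np.exp(-np.sum((centre - cand) ** 2) / sigma ** 2))
    desc = np.asarray(values)
    return desc / (np.linalg.norm(desc) + 1e-8)

def is_foreground(background, frame, x, y, threshold=0.5):
    """Flag a pixel as foreground when its local shape changed w.r.t. the background model."""
    return np.linalg.norm(local_self_similarity(background, x, y)
                          - local_self_similarity(frame, x, y)) > threshold
```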

    Improving Multiple Object Tracking with Optical Flow and Edge Preprocessing

    In this paper, we present a new method for detecting road users in an urban environment which leads to an improvement in multiple object tracking. Our method takes a foreground image as input and improves the object detection and segmentation. This new image can be used as an input to trackers that use foreground blobs from background subtraction. The first step is to create foreground images for all the frames in an urban video. Then, starting from the original blobs of the foreground image, we merge the blobs that are close to one another and that have similar optical flow. The next step is extracting the edges of the different objects to detect multiple objects that might be very close (and be merged in the same blob) and to adjust the size of the original blobs. At the same time, we use the optical flow to detect occlusions between objects that are moving in opposite directions. Finally, we decide which information to keep in order to construct a new foreground image with blobs that can be used for tracking. The system is validated on four videos of an urban traffic dataset. Our method improves the recall and precision metrics for the object detection task compared to the vanilla background subtraction method and improves the CLEAR MOT metrics in the tracking task for most videos.
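
    A minimal sketch of the blob-merging step, assuming a dense optical flow field (e.g. from cv2.calcOpticalFlowFarneback on consecutive grey frames) and binary blob masks from background subtraction; the distance and flow-similarity thresholds are placeholders, not the values used in the paper.

```python
import numpy as np

def centroid_and_mean_flow(blob_mask, flow):
    """Centroid and mean optical flow of a binary blob; 'flow' is an (H, W, 2) array."""
    ys, xs = np.nonzero(blob_mask)
    return np.array([xs.mean(), ys.mean()]), flow[ys, xs].mean(axis=0)

def should_merge(mask_a, mask_b, flow, max_dist=30.0, max_flow_diff=2.0):
    """Merge two foreground blobs when they are close to one another and move
    consistently (thresholds in pixels / pixels per frame are placeholders)."""
    c_a, f_a = centroid_and_mean_flow(mask_a, flow)
    c_b, f_b = centroid_and_mean_flow(mask_b, flow)
    return (np.linalg.norm(c_a - c_b) < max_dist
            and np.linalg.norm(f_a - f_b) < max_flow_diff)
```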

    Road User Detection in Videos

    Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single-frame detectors, but using explicit optical flow usually does not.
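
    The two model variants can be illustrated by how their inputs are assembled. The sketch below only shows the channel-wise concatenation described in the abstract; the use of PyTorch, the exact channel counts and the preprocessing are assumptions, and the rest of the architecture would be a detector whose first convolution accepts the extra channels.

```python
import torch

def double_frame_input(frame_t, frame_prev):
    """RetinaNet-Double style input: concatenate the target frame with a preceding
    frame along the channel axis, giving a 6-channel tensor.
    frame_t, frame_prev: float tensors of shape (3, H, W)."""
    return torch.cat([frame_t, frame_prev], dim=0)   # (6, H, W)

def flow_frame_input(frame_t, flow):
    """RetinaNet-Flow style input: concatenate the target frame with a 2-channel
    optical flow field. flow: float tensor of shape (2, H, W)."""
    return torch.cat([frame_t, flow], dim=0)          # (5, H, W)
```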

    Cyclist-Pedestrian Cohabitation in Seasonal Pedestrian Streets

    There is a renewed focus on active modes of transportation given their multiple advantages, whether for human health or the environment in general. Interest has grown especially in 2020 after the COVID-19 pandemic, when several cities quickly implemented temporary facilities for walking and cycling in the context of physical distancing. Several measures piggybacked on existing programs such as the Montreal initiative for complete streets ('rues conviviales' or 'social/festive streets') that selects streets each year for pilot projects and a final design implementation over a three-year period. This resulted in the seasonal pedestrianization of about ten streets each year since 2020. Though active transportation brings together pedestrians and cyclists under a large umbrella, these users have very different characteristics and there may be conflicts of use if they are mixed in the same space. Cycling is thus generally forbidden on pedestrian streets. Despite these rules, there is cycling traffic on pedestrian streets as cyclists also enjoy car-free facilities, especially when pedestrian traffic is low, which generates complaints by pedestrians. To reconcile and help both categories of users coexist, two Montreal boroughs tried a new rule in the Summer of 2021, letting cyclists bike at walking speed on pedestrian streets while avoiding conflicts with pedestrians. There are few studies on cyclist-pedestrian interactions and, to the best of the authors' knowledge, none on interactions in pedestrian streets. This work aims to study the coexistence or cohabitation of pedestrians and cyclists in several pedestrian streets through video-based analysis. Data were collected at several sites and on several days during the Summer of 2021 along three different pedestrian streets, two of them allowing cycling, to assess how cyclists and pedestrians interact, whether cycling is allowed or not.

    Automated Shuttles as Traffic Calming: Evidence from a Pilot Study in City Traffic

    Discourse about the real-world effects of automated vehicles has intensified over the last decade, but few observational studies have examined their integration in real traffic. This research is based on the dataset prepared by Beauchamp et al. in [1], where video footage from two pilot projects involving automated shuttles in Montreal and Candiac in 2019 was analyzed to compute safety indicators from road user trajectories. The study showed that automated shuttles have safer interactions with other road users compared to human drivers following the same trajectories. Yet, this may not be the only characteristic of automated shuttles. These vehicles are notoriously slow, 10 to 15 km/h slower than human-driven cars in city traffic [1], which on city streets is bound to influence other road users, in particular following cars. It is therefore hypothesized that automated shuttles may have a traffic calming effect, slowing other motorized vehicles [2]. The slower speed and the predictability of automated shuttles, obeying the rules of the road and yielding more willingly to vulnerable road users (pedestrians and cyclists), may also have an impact on these users' behavior [3]: for example, cyclists may pass the shuttle, and pedestrians may cross outside of crosswalks. The present study aims to explore the potential effects of automated shuttles, with their slower speeds and more predictable behavior, on the behavior of other road users. [from the Introduction]

    Autocamera Calibration for traffic surveillance cameras with wide angle lenses

    We propose a method for automatic calibration of a traffic surveillance camera with wide-angle lenses. Video footage of a few minutes is sufficient for the entire calibration process to take place. This method takes the height of the camera above the ground plane as the only user input to overcome the scale ambiguity. The calibration is performed in two stages: (1) intrinsic calibration and (2) extrinsic calibration. Intrinsic calibration is achieved by assuming an equidistant fisheye distortion and an ideal camera model. Extrinsic calibration is accomplished by estimating the two vanishing points, on the ground plane, from the motion of vehicles at perpendicular intersections. The first stage of intrinsic calibration is also valid for thermal cameras. Experiments have been conducted to demonstrate the effectiveness of this approach on visible as well as thermal cameras. Index Terms: fish-eye, calibration, thermal camera, intelligent transportation systems, vanishing point.
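
    The intrinsic stage relies on the equidistant fisheye model, in which the distance of a projected point from the principal point is proportional to the angle from the optical axis (r = f·θ). A minimal sketch of mapping a fisheye pixel to an ideal pinhole image under this model is shown below; the focal length estimation and the vanishing-point-based extrinsic stage are not reproduced, and the assumption here is that the principal point and focal length are already known.

```python
import numpy as np

def fisheye_to_pinhole(u, v, f, cx, cy):
    """Map a pixel from an ideal equidistant fisheye image (r = f * theta) to the
    corresponding pixel of an ideal pinhole image (r = f * tan(theta)).
    f is the focal length in pixels and (cx, cy) the principal point; points with
    theta >= pi/2 have no pinhole equivalent and should be discarded upstream."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)
    if r < 1e-9:
        return u, v
    theta = r / f                     # equidistant model: angle from the optical axis
    scale = (f * np.tan(theta)) / r   # radial rescaling to the pinhole model
    return cx + du * scale, cy + dv * scale
```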

    A Grid-based Representation for Human Action Recognition

    Human action recognition (HAR) in videos is a fundamental research topic in computer vision. It consists mainly in understanding actions performed by humans based on a sequence of visual observations. In recent years, HAR has witnessed significant progress, especially with the emergence of deep learning models. However, most existing approaches for action recognition rely on information that is not always relevant for this task, and are limited in the way they fuse temporal information. In this paper, we propose a novel method for human action recognition that efficiently encodes the most discriminative appearance information of an action, with explicit attention on representative pose features, into a new compact grid representation. Our GRAR (Grid-based Representation for Action Recognition) method is tested on several benchmark datasets, demonstrating that our model can accurately recognize human actions despite intra-class appearance variations and occlusion challenges. Comment: Accepted at the 25th International Conference on Pattern Recognition (ICPR 2020).
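
    The abstract does not detail how the grid is built, so the following is only a hypothetical illustration of a grid representation: person-centred crops (here defined by the bounding box of pose keypoints) from a few selected frames are resized and tiled into a single image that a standard 2D CNN classifier can consume. All names, sizes and the cropping rule are assumptions rather than the GRAR pipeline itself.

```python
import cv2
import numpy as np

def build_grid_image(frames, keypoints, rows=2, cols=3, cell=112):
    """Hypothetical grid representation: crop each selected frame around the person
    (bounding box of its pose keypoints), resize the crop and tile the crops into a
    rows x cols mosaic usable as CNN input.
    frames: list of H x W x 3 uint8 images; keypoints: list of (N, 2) arrays (x, y)."""
    grid = np.zeros((rows * cell, cols * cell, 3), dtype=np.uint8)
    for i, (frame, kps) in enumerate(zip(frames[:rows * cols], keypoints)):
        x1, y1 = np.floor(kps.min(axis=0)).astype(int)
        x2, y2 = np.ceil(kps.max(axis=0)).astype(int)
        crop = frame[max(0, y1):y2 + 1, max(0, x1):x2 + 1]
        crop = cv2.resize(crop, (cell, cell))
        r, c = divmod(i, cols)
        grid[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = crop
    return grid
```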